3 research outputs found
Improving the Performance of DNN-based Software Services using Automated Layer Caching
Deep Neural Networks (DNNs) have become an essential component in many
application domains including web-based services. A variety of these services
require high throughput and (close to) real-time features, for instance, to
respond or react to users' requests or to process a stream of incoming data on
time. However, the trend in DNN design is toward larger models with many layers
and parameters to achieve more accurate results. Although these models are
often pre-trained, the computational complexity in such large models can still
be relatively significant, hindering low inference latency. Implementing a
caching mechanism is a typical systems engineering solution for speeding up a
service's response time. However, traditional caching is often not suitable for
DNN-based services. In this paper, we propose an end-to-end automated solution
to improve the performance of DNN-based services in terms of their
computational complexity and inference latency. Our caching method adopts the
ideas of self-distillation of DNN models and early exits. The proposed solution
is an automated online layer caching mechanism that allows early exiting of a
large model during inference time if the cache model in one of the early exits
is confident enough for final prediction. One of the main contributions of this
paper is that we have implemented the idea as an online caching mechanism, meaning that
the cache models do not need access to training data and perform solely based
on the incoming data at run-time, making it suitable for applications using
pre-trained models. Our experimental results on two downstream tasks (face and
object classification) show that, on average, caching can reduce the
computational complexity of those services by up to 58% (in terms of FLOPs count)
and improve their inference latency by up to 46% with little to no reduction in
accuracy.
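The abstract does not give implementation details, so the following is only a minimal sketch of the early-exit idea it describes: small "cache models" attached after intermediate backbone layers, where a sufficiently confident intermediate prediction ends inference early. All class and function names here are hypothetical, and the toy layers stand in for a real pre-trained network.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class LayerCachedModel:
    """Hypothetical sketch of online layer caching with early exits:
    a cache model after layer i can return a prediction immediately
    when its confidence clears the threshold, skipping later layers."""

    def __init__(self, layers, head, cache_models, threshold=0.9):
        self.layers = layers              # backbone layer functions
        self.head = head                  # final classifier
        self.cache_models = cache_models  # {layer_index: cache classifier}
        self.threshold = threshold        # confidence required to exit early

    def predict(self, x):
        h = x
        for i, layer in enumerate(self.layers):
            h = layer(h)
            cache = self.cache_models.get(i)
            if cache is not None:
                probs = softmax(cache(h))
                if probs.max() >= self.threshold:   # confident: exit early
                    return int(probs.argmax()), i + 1  # layers actually run
        probs = softmax(self.head(h))                # fell through: full model
        return int(probs.argmax()), len(self.layers)

# Toy demo: 3 "layers" with a very confident cache model after layer 0.
layers = [lambda h: h + 1, lambda h: h * 2, lambda h: h - 3]
head = lambda h: np.array([h.sum(), -h.sum()])
cache_models = {0: lambda h: np.array([10.0, 0.0])}  # always favors class 0
model = LayerCachedModel(layers, head, cache_models, threshold=0.9)

pred, layers_run = model.predict(np.zeros(4))
print(pred, layers_run)  # early exit after the first layer
```

Because the cache models only score activations of incoming requests, nothing in this loop needs the original training data, which is the property the abstract highlights for pre-trained deployments.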
Economic Efficiency of Intensive Care Units Department of Hospitals Affiliated With Yazd University of Medical Sciences: A DEA Approach
Introduction: Hospitals are the most expensive operational units of the health care system, so assessing their cost efficiency is very important. The aim of this study was to evaluate the economic efficiency of the intensive care units of hospitals affiliated with Yazd University of Medical Sciences.
Methods: This is a descriptive-analytic study performed in 2014. The intensive care units of 8 hospitals affiliated with Yazd University of Medical Sciences were included. Data envelopment analysis (DEA) was used to evaluate the economic efficiency of the intensive care units. Input variables included the numbers of physicians, nurses, active beds, and pieces of equipment (ventilators); output variables included the bed occupancy rate and the number of patients discharged healthy. Input prices comprised physicians' and nurses' incomes, bed price, and the depreciation cost of the equipment (ventilators).
Results: The findings show that among the 8 studied hospitals, the efficiency of 5 hospitals was 1, and the efficiencies of the other hospitals were 0.174, 0.645, and 0.855. The mean economic efficiency of the intensive care units of all hospitals was 0.834 in 2014.
Conclusion: By improving the allocation of hospital resources, including human resources and equipment, the efficiency level of hospitals can be improved.
Keywords: Economic Efficiency, Educational Hospital, Data Envelopment Analysis (DEA).
Citation: Abedi M, Bahrami MA, Yusefzadeh H, Kiani MM, Moeeni M. Economic Efficiency of Intensive Care Units Department of Hospitals Affiliated With Yazd University of Medical Sciences: A DEA Approach. Journal of Health Based Research 2016; 2(1): 29-38
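The abstract does not say which DEA variant was solved, so the sketch below shows the standard input-oriented CCR envelopment model as one plausible formulation: for each decision-making unit (here, an ICU), a linear program finds the smallest factor theta by which its inputs could be scaled while a combination of peer units still matches its outputs. The function name and the toy data are illustrative, not from the study.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    Returns an array of efficiency scores in (0, 1]."""
    n, m = X.shape
    _, s = Y.shape
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
        c = np.zeros(1 + n)
        c[0] = 1.0
        # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.x[0]
    return scores

# Toy example: two units, one input (e.g. beds), one output (e.g. discharges).
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print(dea_ccr_input(X, Y))  # unit 0 is efficient (1.0); unit 1 scores 0.5
```

A score of 1 marks a unit on the efficient frontier, matching how the abstract reports 5 of the 8 hospitals; a score below 1 is the proportion of its current inputs an inefficient unit would need to produce the same outputs.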
Improving the Performance of DNN-based Software Services using Automated Layer Caching
Deep Neural Networks (DNNs) have become an essential component in many application domains including web-based services. A variety of these services require high throughput and (close to) real-time features, for instance, to respond or react to users' requests or to process a stream of incoming data on time. However, the trend in DNN design is toward larger models with many layers and parameters to achieve more accurate results. Although these models are often pre-trained, the computational complexity in such large models can still be relatively significant, hindering low inference latency. Implementing a caching mechanism is a typical software engineering solution for speeding up a service's response time. However, traditional caching is often not suitable for DNN-based services. In this thesis, we propose an end-to-end automated solution to improve the performance of DNN-based services in terms of their computational complexity and inference latency. Our caching method adopts the ideas of self-distillation of DNN models and early exits. The proposed solution is an automated online layer caching mechanism that allows early exiting of a large model during inference time if the cache model in one of the early exits is confident enough for final prediction. One of the main contributions of this thesis is that we have implemented the idea as an online caching mechanism, meaning that the cache models do not need access to training data and perform solely based on the incoming data at run-time, making it suitable for applications using pre-trained models. Our experimental results on two downstream tasks (face and object classification) show that, on average, caching can reduce the computational complexity of those services by up to 58% (in terms of FLOPs count) and improve their inference latency by up to 46% with little to no reduction in accuracy.